Translation and Dictionary
Words near each other
・ Degree of start-stop distortion
・ Degree of truth
・ Degree of unsaturation
・ Degree programmes of the University of Groningen
・ Degree symbol
・ Degree-constrained spanning tree
・ Degree-preserving randomization
・ Degreed
・ Degrees north
・ Degrees of Certainty
・ Degrees of Connection
・ Degrees of Eastern Orthodox monasticism
・ Degrees of freedom
・ Degrees of freedom (mechanics)
・ Degrees of freedom (physics and chemistry)
Degrees of freedom (statistics)
・ Degrees of freedom problem
・ Degrees of glory
・ Degrees of the University of Oxford
・ Degrees-R-Us
・ Degreeting
・ DeGrenier
・ Degressive proportionality
・ Degrmen
・ Degron
・ DeGroot
・ DeGroot learning
・ DeGroote School of Business
・ Degrowth
・ Degré



Degrees of freedom (statistics) : English Wikipedia
Degrees of freedom (statistics)

In statistics, the number of degrees of freedom is the number of values in the final calculation of a statistic that are free to vary.
The number of independent ways in which a dynamic system can move without violating any constraint imposed on it is called the ''number of degrees of freedom''. In other words, the number of degrees of freedom can be defined as the minimum number of independent coordinates that specify the position of the system completely.
Estimates of statistical parameters can be based upon different amounts of information or data. The number of independent pieces of information that go into the estimate of a parameter is called the degrees of freedom. In general, the degrees of freedom of an estimate of a parameter equal the number of independent scores that go into the estimate minus the number of parameters used as intermediate steps in the estimation of the parameter itself. For example, the sample variance has ''N'' − 1 degrees of freedom, since it is computed from ''N'' random scores minus the single parameter estimated as an intermediate step, the sample mean.
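As a concrete illustration, here is a minimal NumPy sketch of the ''N'' − 1 divisor (the data, seed, and distribution parameters below are arbitrary choices for the example, not part of the original text). The key point is that the deviations from the sample mean always sum to zero, so only ''N'' − 1 of them are free to vary:

import numpy as np

# Illustrative data: N = 8 random scores (seed and parameters are arbitrary).
rng = np.random.default_rng(0)
x = rng.normal(loc=10.0, scale=2.0, size=8)
N = x.size

x_bar = x.mean()            # the one parameter estimated as an intermediate step
deviations = x - x_bar      # these deviations always sum to zero ...
print(deviations.sum())     # ... so only N - 1 of them are free to vary

# Sample variance with N - 1 degrees of freedom; NumPy exposes the divisor
# through the ddof ("delta degrees of freedom") argument.
s2_manual = (deviations ** 2).sum() / (N - 1)
s2_numpy = x.var(ddof=1)
print(s2_manual, s2_numpy)  # equal up to floating-point rounding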
Mathematically, degrees of freedom is the number of dimensions of the domain of a random vector, or essentially the number of "free" components (how many components need to be known before the vector is fully determined).
The term is most often used in the context of linear models (linear regression, analysis of variance), where certain random vectors are constrained to lie in linear subspaces, and the number of degrees of freedom is the dimension of the subspace. The degrees of freedom are also commonly associated with the squared lengths (or "sum of squares" of the coordinates) of such vectors, and the parameters of chi-squared and other distributions that arise in associated statistical testing problems.
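For instance, in simple linear regression the residual vector is constrained to be orthogonal to the column space of the design matrix, leaving ''n'' − ''p'' residual degrees of freedom. The following is a rough NumPy sketch (the simulated data, sample size, and coefficients are illustrative assumptions, not taken from the original text):

import numpy as np

# Illustrative simulated data: n observations, p = 2 fitted parameters
# (intercept and slope); the true coefficients and noise level are arbitrary.
rng = np.random.default_rng(1)
n, p = 30, 2
x = rng.uniform(0.0, 10.0, size=n)
y = 1.5 + 0.8 * x + rng.normal(scale=1.0, size=n)

X = np.column_stack([np.ones(n), x])          # design matrix (n x p)
beta, *_ = np.linalg.lstsq(X, y, rcond=None)  # least-squares fit
residuals = y - X @ beta                      # lie in an (n - p)-dimensional subspace

df_resid = n - p                              # residual degrees of freedom
sigma2_hat = residuals @ residuals / df_resid # unbiased estimate of the error variance
print(df_resid, sigma2_hat)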
While introductory textbooks may introduce degrees of freedom as distribution parameters or through hypothesis testing, it is the underlying geometry that defines degrees of freedom and is critical to a proper understanding of the concept. Walker (1940) stated this succinctly: the degrees of freedom are "the number of observations minus the number of necessary relations among these observations."
==Conceptual history==
Although the basic concept of degrees of freedom was recognized as early as 1821, in the work of the astronomer and mathematician Carl Friedrich Gauss, its modern definition and usage were first elaborated by the English statistician William Sealy Gosset in his 1908 Biometrika article "The Probable Error of a Mean", published under the pen name "Student". While Gosset did not actually use the term "degrees of freedom", he explained the concept in the course of developing what became known as Student's t-distribution. The term itself was popularized by the English statistician and biologist Ronald Fisher, beginning with his 1922 work on chi-squared.

Source of the excerpt: the free encyclopedia Wikipedia
Read the full article on "Degrees of freedom (statistics)" at Wikipedia


